Microservice identification method based on class dependencies under resource constraints
SHAO Jianwei, LIU Qiqun, WANG Huanqiang, CHEN Yaowang, YU Dongjin, SALAMAT Boranbaev
Journal of Computer Applications    2020, 40 (12): 3604-3611.   DOI: 10.11772/j.issn.1001-9081.2020040495
To improve the automation of reconstructing legacy software systems on a microservice architecture, and based on the observation that the resource data operated on by two dependent classes are correlated, a microservice identification method based on class dependencies under resource constraints was proposed. Firstly, a class dependency graph was built from the class dependencies in the legacy program, and each class was assigned a resource entity label. Then, a partitioning algorithm driven by the resource entity labels was designed for the class dependency graph and used to divide the original system into candidate microservices. Finally, candidate microservices with high mutual dependency were merged to obtain the final microservice set. Experimental results on four open source projects from GitHub show that the proposed method achieves a microservice partitioning accuracy above 90%, indicating that identifying microservices by jointly considering class dependencies and resource constraints is reasonable and effective.
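As a rough illustration of the pipeline the abstract describes (graph construction, label-driven partitioning, and merging of strongly coupled candidates), the following Python sketch may help; the function names, the coupling measure, and the merge threshold are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the described identification pipeline (names and the
# merge threshold are illustrative assumptions, not the paper's code).
from collections import defaultdict

def identify_microservices(dependencies, resource_label, merge_threshold=0.5):
    """dependencies: list of (class_a, class_b) edges from static analysis.
    resource_label: dict mapping each class to its resource entity label."""
    # Step 1: candidate partition - group classes sharing a resource entity label.
    groups = defaultdict(set)
    for cls, label in resource_label.items():
        groups[label].add(cls)
    candidates = list(groups.values())

    # Step 2: measure how strongly two candidates depend on each other.
    def coupling(a, b):
        cross = sum(1 for u, v in dependencies
                    if (u in a and v in b) or (u in b and v in a))
        return cross / max(1, min(len(a), len(b)))

    # Step 3: greedily merge the most strongly coupled candidate pair.
    merged = True
    while merged and len(candidates) > 1:
        merged = False
        i, j, score = max(((i, j, coupling(candidates[i], candidates[j]))
                           for i in range(len(candidates))
                           for j in range(i + 1, len(candidates))),
                          key=lambda t: t[2])
        if score >= merge_threshold:
            candidates[i] |= candidates.pop(j)
            merged = True
    return candidates
```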
CNN quantization and compression strategy for edge computing applications
CAI Ruichu, ZHONG Chunrong, YU Yang, CHEN Bingfeng, LU Ye, CHEN Yao
Journal of Computer Applications    2018, 38 (9): 2449-2454.   DOI: 10.11772/j.issn.1001-9081.2018020477
Focused on the problem that the memory- and computation-intensive nature of Convolutional Neural Networks (CNN) limits their adoption on embedded devices in edge computing scenarios, a CNN compression method combining network weight pruning with data quantization tailored to the data types of embedded hardware platforms was proposed. Firstly, according to the weight distribution of each layer of the original CNN, a threshold-based pruning method was applied to eliminate the weights that have little impact on network accuracy, removing redundant information from the model while preserving the important connections. Secondly, the bit-widths required for the weights and activations were analyzed based on the computational characteristics of the embedded platform, and dynamic fixed-point quantization was employed to reduce the bit-width of the network model. Finally, the network was fine-tuned to further compress the model and reduce computational cost while maintaining inference accuracy. The experimental results show that this method reduces the storage of VGG-19 by more than 22 times with an accuracy drop of only 0.3%, achieving nearly lossless compression. Moreover, when evaluated on multiple models, the method reduces model storage by up to 25 times with an average accuracy loss within 1.46%, demonstrating the effectiveness of the proposed compression.
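The two compression steps can be illustrated with a small NumPy sketch; the per-layer threshold and the per-tensor fractional-length choice below are assumptions for illustration, not the exact scheme evaluated in the paper.

```python
# Minimal sketch of threshold pruning plus dynamic fixed-point quantization
# (NumPy only; threshold and bit-width handling are illustrative assumptions).
import numpy as np

def prune_by_threshold(weights, threshold):
    """Zero out weights whose magnitude falls below the threshold."""
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

def quantize_dynamic_fixed_point(weights, bit_width=8):
    """Pick a per-tensor fractional length so the largest weight still fits,
    then round every weight onto that fixed-point grid."""
    max_abs = float(np.max(np.abs(weights))) or 1.0
    int_bits = int(np.ceil(np.log2(max_abs))) + 1          # sign + integer part
    frac_bits = bit_width - int_bits
    scale = 2.0 ** frac_bits
    q = np.clip(np.round(weights * scale),
                -2 ** (bit_width - 1), 2 ** (bit_width - 1) - 1)
    return q / scale, frac_bits
```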
Scheduling strategy of value evaluation for output-event of actor based on cyber-physical system
ZHANG Jing, CHEN Yao, FAN Hongbo, SUN Jun
Journal of Computer Applications    2017, 37 (6): 1663-1669.   DOI: 10.11772/j.issn.1001-9081.2017.06.1663
The performance and correctness of a cyber-physical system are affected by the real-time behavior of its state transitions. To address this problem, focusing on the state transition process of an actor's output-event driven system, a new value-evaluation scheduling strategy for actor output events, named Value Evaluation-Information Entropy and Quality of Data (VE-IE&QoD), was proposed. Firstly, the real-time properties of events were expressed with the super dense time model, and the self-information of the output event, the information entropy of the actor, and the quality of data were defined as the indexes of the value evaluation function. Then, value evaluation was carried out over the actor's task execution, with the weighting coefficients of the parametric equation increased appropriately. Finally, discrete event models incorporating the proposed VE-IE&QoD strategy, the traditional Earliest Deadline First (EDF) scheduling algorithm, and the Information Entropy* (IE*) scheduling strategy were built on the Ptolemy II platform; the behavior of the different models was analyzed, and their changes in value evaluation and execution time were compared. The experimental results show that the VE-IE&QoD strategy reduces the average system execution time and improves memory usage efficiency and task value evaluation, thereby improving system performance and correctness to some extent.
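A toy illustration of value-based event ordering, as opposed to pure EDF, follows; the value score combining self-information and quality of data, and its weights, are assumptions rather than the paper's exact VE-IE&QoD formula.

```python
# Illustrative sketch: rank pending output events by a value score built from
# self-information and quality of data (the weighting here is an assumption).
import math, heapq

def self_information(probability):
    return -math.log2(probability)

def value_score(event, w_info=1.0, w_qod=1.0):
    """event: dict with 'probability' of the output value and 'qod' in [0, 1]."""
    return w_info * self_information(event["probability"]) + w_qod * event["qod"]

def schedule(events):
    """Yield events in descending value order; ties broken by earlier deadline."""
    heap = [(-value_score(e), e["deadline"], i, e) for i, e in enumerate(events)]
    heapq.heapify(heap)
    while heap:
        _, _, _, event = heapq.heappop(heap)
        yield event
```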
Approach of large matrix multiplication based on Hadoop
SHUN Yuanshuai, CHEN Yao, GUAN Xinjun, LIN Chen
Journal of Computer Applications    2013, 33 (12): 3339-3344.  
Large and very large matrices cannot be handled by current matrix multiplication algorithms. With the development of the MapReduce programming framework, parallel programs have become the main approach to matrix computation. The matrix multiplication algorithms based on MapReduce were summarized, and an improved strategy for large matrices was proposed, which trades off the data volume between computation on a single worker node and network transmission. The experimental results show that the parallel algorithms outperform the traditional ones on large matrices, and that performance improves as the cluster grows.
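The block-partitioned map/reduce formulation described above can be sketched in a single process as follows; the key scheme and block size are illustrative assumptions, and a real Hadoop job would emit these pairs through the framework's map, shuffle, and reduce phases.

```python
# Single-process sketch of blocked matrix multiplication in MapReduce style
# (block size and key scheme are illustrative assumptions).
from collections import defaultdict
import numpy as np

def map_blocks(A, B, block):
    """Emit ((i, j), partial_block) pairs, one per block-level product."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    for i in range(0, n, block):
        for j in range(0, m, block):
            for p in range(0, k, block):
                yield ((i // block, j // block),
                       A[i:i+block, p:p+block] @ B[p:p+block, j:j+block])

def reduce_blocks(pairs):
    """Sum the partial products that share an output-block key."""
    out = defaultdict(lambda: 0)
    for key, partial in pairs:
        out[key] = out[key] + partial
    return out
```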
Lossless image compression coding method based on static dictionary
GAO Jian, SONG Ao, LIU Wan, CHEN Yao
Journal of Computer Applications    2011, 31 (06): 1578-1580.   DOI: 10.3724/SP.J.1087.2011.01578
Combining the ideas of previous-pixel predictive coding and Lempel-Ziv-Welch (LZW) coding, and to address the low compression efficiency on signals that change frequently, a lossless image compression coding method was proposed. In this method, the correlation between neighboring pixels of an image is used to construct a static dictionary, and the image is compressed losslessly by looking up and coding the predictively coded data against this dictionary. The experimental results show that the proposed method is easy to implement and achieves higher compression efficiency than the LZW algorithm and the WinZIP algorithm.
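The two stages described above, previous-pixel prediction followed by coding against a fixed dictionary, can be illustrated with a toy Python sketch; the dictionary contents and escape handling are assumptions for illustration only.

```python
# Toy sketch: previous-pixel prediction, then coding residuals against a
# static dictionary (dictionary contents are an illustrative assumption).
def predict_residuals(row):
    """Replace each pixel with its difference from the previous pixel."""
    prev, out = 0, []
    for px in row:
        out.append(px - prev)
        prev = px
    return out

def encode_with_static_dictionary(residuals, dictionary):
    """dictionary: dict mapping frequent residual values to short codes;
    uncommon residuals fall back to an escape pair."""
    return [dictionary.get(r, ('ESC', r)) for r in residuals]

# Small residuals dominate in natural images, so they receive the short codes.
static_dict = {0: '0', 1: '10', -1: '110'}
codes = encode_with_static_dictionary(predict_residuals([100, 100, 101, 100, 120]),
                                      static_dict)
```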
Using directed graph based BDMM algorithm for Chinese word segmentation
CHEN Yao-dong, WANG Ting
Journal of Computer Applications    2005, 25 (06): 1442-1444.   DOI: 10.3724/SP.J.1087.2005.01442
Chinese word segmentation is one of the fundamental techniques for Chinese information processing. The authors first reviewed current segmentation algorithms and then modified the traditional Maximum Match (MM) algorithm. Taking both word-coverage rate and sentence-coverage rate into account, a character directed graph with ambiguity marks was implemented to search all possible segmentation sequences. The method was compared with the classic MM algorithms and the omni-segmentation algorithm, and the experimental results show that the directed graph based algorithm achieves a higher coverage rate with lower complexity.
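The character directed graph of candidate words and the enumeration of segmentation sequences can be sketched as follows; the single-character fallback, maximum word length, and dictionary handling are assumptions, not the paper's implementation.

```python
# Minimal sketch: build a character-level directed graph of candidate words,
# then enumerate every segmentation path (toy dictionary handling assumed).
def build_word_graph(sentence, dictionary, max_len=4):
    """graph[i] holds every end position j such that sentence[i:j] is a word."""
    graph = {i: [] for i in range(len(sentence) + 1)}
    for i in range(len(sentence)):
        for j in range(i + 1, min(i + max_len, len(sentence)) + 1):
            if sentence[i:j] in dictionary or j == i + 1:  # single chars always allowed
                graph[i].append(j)
    return graph

def enumerate_segmentations(sentence, graph, start=0):
    """Depth-first walk over the graph yields every segmentation sequence."""
    if start == len(sentence):
        yield []
        return
    for end in graph[start]:
        for rest in enumerate_segmentations(sentence, graph, end):
            yield [sentence[start:end]] + rest
```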